poison frog
Reviews: Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Based on the information provided, I would encourage that the following changes be made to the paper, in addition to what was suggested in the original reviews:

* Improve the comparison with related work by Suciu et al. and Koh and Liang, using the points on the more realistic threat model and the experimental results mentioned in your feedback.

The proposed attacks also don't require the presence of a trigger at test time (as is done in what is commonly referred to as a "backdoor"). To the best of my knowledge, the manuscript includes pointers to relevant prior work on both shallow and deep learning algorithms. For these reasons, I think the manuscript is original and should contribute to increased visibility of poisoning in the adversarial ML community. Furthermore, the manuscript is well written and easy to follow.
Feed Me: Robotic Infiltration of Poison Frog Families
Chen, Tony G., Goolsby, Billie C., Bernal, Guadalupe, O'Connell, Lauren A., Cutkosky, Mark R.
We present the design and operation of tadpole-mimetic robots prepared for a study of the parenting behaviors of poison frogs, which pair bond and raise their offspring. The mission of these robots is to convince poison frog parents that they are tadpoles, which need to be fed. Tadpoles indicate this need, at least in part, by wriggling with a characteristic frequency and amplitude. While the study is in progress, preliminary indications are that the TadBots have passed their test, at least for father frogs. We discuss the design and operational requirements for producing convincing TadBots and provide some details of the study design and plans for future work.
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- North America > United States > New York (0.05)
- South America > Peru (0.04)
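The TadBot abstract above notes that tadpoles signal hunger by wriggling at a characteristic frequency and amplitude. Purely as a hypothetical illustration (the paper's actual control interface, frequency, and amplitude are not given here; `set_actuator`, `WRIGGLE_HZ`, and `AMPLITUDE_DEG` are placeholders), such a begging motion could be generated as a simple sinusoidal drive signal:

```python
# Hypothetical sketch of a TadBot-style wriggle command: a sinusoidal tail
# deflection with a fixed frequency and amplitude. All values and the
# set_actuator interface are placeholders, not details from the paper.
import math
import time

WRIGGLE_HZ = 2.0      # placeholder characteristic frequency (Hz)
AMPLITUDE_DEG = 15.0  # placeholder tail-deflection amplitude (degrees)

def wriggle(set_actuator, duration_s=5.0, dt=0.02):
    """Drive the tail actuator sinusoidally for duration_s seconds."""
    t0 = time.monotonic()
    while time.monotonic() - t0 < duration_s:
        t = time.monotonic() - t0
        set_actuator(AMPLITUDE_DEG * math.sin(2 * math.pi * WRIGGLE_HZ * t))
        time.sleep(dt)  # command the actuator at ~50 Hz
```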
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Shafahi, Ali, Huang, W. Ronny, Najibi, Mahyar, Suciu, Octavian, Studer, Christoph, Dumitras, Tudor, Goldstein, Tom
Data poisoning is an attack on machine learning models wherein the attacker adds examples to the training set to manipulate the behavior of the model at test time. This paper explores poisoning attacks on neural nets. The proposed attacks use "clean-labels"; they don't require the attacker to have any control over the labeling of training data. They are also targeted; they control the behavior of the classifier on a specific test instance without degrading overall classifier performance. For example, an attacker could add a seemingly innocuous image (that is properly labeled) to a training set for a face recognition engine, and control the identity of a chosen person at test time.
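The paper crafts these clean-label poisons by "feature collision": optimizing a poison image to land near the target instance in the network's feature space while staying visually close to a correctly labeled base image. Below is a minimal sketch of that crafting step, assuming a frozen PyTorch feature extractor `feat` (e.g., a network's penultimate layer); the function name, hyperparameters, and pixel-range clamp are illustrative assumptions rather than the authors' exact settings.

```python
# Minimal sketch of clean-label poison crafting via feature collision,
# using a forward (gradient) step on the feature-space distance and a
# backward (proximal) step on the pixel-space distance to the base image.
import torch

def craft_poison(feat, base, target, beta=0.1, lr=0.01, steps=1000):
    """Optimize a poison that collides with `target` in feature space
    while staying visually close to the correctly labeled `base`."""
    poison = base.clone().requires_grad_(True)
    target_feat = feat(target).detach()
    for _ in range(steps):
        # Forward step: pull the poison toward the target in feature space.
        loss = torch.norm(feat(poison) - target_feat) ** 2
        grad, = torch.autograd.grad(loss, poison)
        with torch.no_grad():
            poison -= lr * grad
            # Backward (proximal) step: stay close to the base image so the
            # poison still looks correctly labeled to a human annotator.
            poison.copy_((poison + lr * beta * base) / (1 + lr * beta))
            poison.clamp_(0, 1)  # assumes inputs normalized to [0, 1]
    return poison.detach()
```

Injected into the training set with its original (correct) label, such a poison can cause a retrained classifier to misclassify the chosen target instance while leaving overall accuracy essentially unchanged.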